preserve responses text format and resolve prompt cache retention #313
ndycode wants to merge 6 commits into feat/openai-parity-pr4 from
Conversation
Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
📝 Walkthrough

this change adds prompt cache retention configuration support with a three-tier resolution strategy: direct body assignment, provider-specific options, or model config defaults. the transformer integrates this into the existing request body transformation pipeline without altering core control flow.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes

Notable gaps & concerns
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/public-api-contract.test.ts`:
- Around lines 119-133: the contract test currently only compares positional vs named equality, which can miss dropped fields. Update the test to assert explicitly that both transformed outputs include prompt_cache_retention and the full text.format object (including type/name/schema/strict and the compat_response schema properties) and that their values match the input. Locate the variables holding the two transformed results (e.g., positionalResult/namedResult, or whatever the test uses) and add assertions that positionalResult.prompt_cache_retention === input.prompt_cache_retention and namedResult.prompt_cache_retention === input.prompt_cache_retention, plus deep-equality assertions comparing positionalResult.text.format and namedResult.text.format against the original text.format object (including the compat_response schema), so any accidental stripping is caught.
In `@test/request-transformer.test.ts`:
- Around lines 1286-1297: add a regression test verifying that promptCacheRetention is inherited at the model level and through the named-params overload. Extend test/request-transformer.test.ts with cases that call transformRequestBody using a UserConfig where models['some-model'].options.promptCacheRetention is set, and assert that result.prompt_cache_retention matches that value. Also call the named-params overload of transformRequestBody (passing model and input as separate args) with the same model-level config to ensure the resolver behaves identically. Reference transformRequestBody, UserConfig, models[...].options.promptCacheRetention, RequestBody, and the named-params invocation form to locate where to add these deterministic vitest assertions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: c1964911-d1ca-477b-a78e-7597cc248099
📒 Files selected for processing (4)
lib/request/request-transformer.ts
lib/types.ts
test/public-api-contract.test.ts
test/request-transformer.test.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (2)
lib/**
⚙️ CodeRabbit configuration file
focus on auth rotation, windows filesystem IO, and concurrency. verify every change cites affected tests (vitest) and that new queues handle EBUSY/429 scenarios. check for logging that leaks tokens or emails.
Files:
lib/request/request-transformer.ts
lib/types.ts
test/**
⚙️ CodeRabbit configuration file
tests must stay deterministic and use vitest. demand regression cases that reproduce concurrency bugs, token refresh races, and windows filesystem behavior. reject changes that mock real secrets or skip assertions.
Files:
test/public-api-contract.test.ts
test/request-transformer.test.ts
🔇 Additional comments (3)
lib/request/request-transformer.ts (1)
201-211: good precedence and safe assignment for prompt cache retention

this is clean.
lib/request/request-transformer.ts:201 and lib/request/request-transformer.ts:920 implement the expected resolution order and only write when defined, which preserves host intent and avoids clobbering unset requests. coverage is present in vitest at test/request-transformer.test.ts:637 and test/request-transformer.test.ts:1286. Also applies to: 920-923
test/request-transformer.test.ts (1)
637-649: nice precedence coverage for body vs provider options

these cases at test/request-transformer.test.ts:637 and test/request-transformer.test.ts:651 are strong and deterministic, and they directly lock the new precedence behavior. Also applies to: 651-664
lib/types.ts (1)
27-37: type contract update is coherent

the new alias at lib/types.ts:31 and its usage in lib/types.ts:27 and lib/types.ts:142 are consistent and align the config/request contracts without widening to any. Also applies to: 142-142
@coderabbitai review
✅ Actions performed: review triggered.
force-pushed 1ad9a65 to b7fef3e
All review threads are resolved and later commits addressed this stale automated change request.
Summary
- preserve text.format through transformer defaults and fast-session tuning
- resolve prompt_cache_retention from the request, provider options, or model config

Stack
#312

note: greptile review for oc-chatgpt-multi-auth. cite files like lib/foo.ts:123. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary
this pr fixes two quiet but impactful request-contract bugs: text.format was being silently dropped on transformer re-assignment (now preserved via object spread in both the normal and fast-session paths), and prompt_cache_retention had no resolution cascade (now resolved body → providerOptions.openai → modelConfig, consistent with how textVerbosity and include work). it also wires the codex-max short alias into the model map and tightens the PromptCacheRetention type from bare string to a branded open union for better autocomplete.

- resolvePromptCacheRetention follows the same three-tier priority pattern already established by resolveTextVerbosity and resolveInclude: no concurrency risk, pure function.
- text.format preservation is correct in both the normal path (body.text = { ...body.text, verbosity: ... }) and the fast-session path (body.text = { ...body.text, verbosity: "low" }).
- the codex-max alias is not asserted in model-map.test.ts, and the fast-session + text.format combination has no dedicated test.

Confidence Score: 5/5
Important Files Changed
- lib/request/request-transformer.ts: adds resolvePromptCacheRetention (body → providerOptions → modelConfig priority chain) and fixes text.format preservation via object spread; logic is correct in both the normal and fast-session paths.
- lib/types.ts: adds the PromptCacheRetention branded open-string union and wires it into ConfigOptions and RequestBody; well-typed, autocomplete-friendly pattern.
- adds addReasoningAliases("codex-max", "gpt-5.1-codex-max") for the short alias; correct placement, but no corresponding coverage in model-map.test.ts.
- test/request-transformer.test.ts: covers the prompt_cache_retention priority chain and text.format preservation on the normal path; the fast-session + text.format combination is unexercised.
- test/public-api-contract.test.ts: asserts prompt_cache_retention and text.format are preserved; solid regression guard.

Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[incoming RequestBody] --> B{body.prompt_cache_retention set?}
    B -- yes --> E[use body value]
    B -- no --> C{providerOptions.openai\n.promptCacheRetention set?}
    C -- yes --> E
    C -- no --> D{modelConfig\n.promptCacheRetention set?}
    D -- yes --> E
    D -- no --> F[omit field - no-op]
    E --> G[assign body.prompt_cache_retention]
    A --> H{body.text.format set?}
    H -- yes --> I[spread: body.text = ...\nbody.text + verbosity override]
    H -- no --> I
    I --> J{fastSession?}
    J -- yes --> K[spread: body.text = ...\nbody.text + verbosity = low\nformat preserved]
    J -- no --> L[body.text.format preserved]
    K --> L
```
Reviews (5): Last reviewed commit: "Merge main into feat/openai-parity-pr5"